Fast and scalable Lasso via stochastic Frank–Wolfe methods with a convergence guarantee

Authors
Abstract

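The abstract itself is not reproduced on this page. Based on the title alone, the sketch below shows one classical (deterministic) Frank–Wolfe step for the ℓ1-constrained least-squares (Lasso) problem in Python; it is an illustrative assumption, not the stochastic algorithm of the paper, and the function name, step-size schedule, and iteration count are hypothetical choices.

import numpy as np

def frank_wolfe_lasso(A, b, tau, n_iters=200):
    """Deterministic Frank-Wolfe for min_x 0.5*||Ax - b||^2 s.t. ||x||_1 <= tau.
    Illustrative sketch only; not the paper's stochastic variant."""
    n, d = A.shape
    x = np.zeros(d)
    for k in range(n_iters):
        grad = A.T @ (A @ x - b)        # full gradient; a stochastic variant would
                                        # replace this with a mini-batch estimate
        i = int(np.argmax(np.abs(grad)))  # linear minimization oracle over the l1 ball
        s = np.zeros(d)
        s[i] = -tau * np.sign(grad[i])  # vertex of the l1 ball of radius tau
        gamma = 2.0 / (k + 2.0)         # standard step-size schedule
        x = (1 - gamma) * x + gamma * s # convex combination keeps x feasible
    return x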

Related articles

Fast Asynchronous Parallel Stochastic Gradient Descent: A Lock-Free Approach with Convergence Guarantee

Stochastic gradient descent (SGD) and its variants have become increasingly popular in machine learning due to their efficiency and effectiveness. To handle large-scale problems, researchers have recently proposed several parallel SGD methods for multicore systems. However, existing parallel SGD methods cannot achieve satisfactory performance in real applications. In this paper, we propose a f...
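For reference, a plain serial SGD loop on a least-squares objective is sketched below in Python; it is only a baseline illustration (the cited paper concerns lock-free parallel variants where several threads apply such updates concurrently), and the learning rate and epoch count are arbitrary assumptions.

import numpy as np

def sgd_least_squares(A, b, lr=0.01, epochs=5, seed=0):
    """Serial SGD on 0.5*||Ax - b||^2, sampling one row per update."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            residual = A[i] @ x - b[i]   # scalar residual for sample i
            x -= lr * residual * A[i]    # stochastic gradient step
    return x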

Full text

Scalable Kernel Methods via Doubly Stochastic Gradients

The general perception is that kernel methods are not scalable, so neural nets become the choice for large-scale nonlinear learning problems. Have we tried hard enough for kernel methods? In this paper, we propose an approach that scales up kernel methods using a novel concept called “doubly stochastic functional gradients”. Based on the fact that many kernel methods can be expressed as convex ...
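As a simplified stand-in for "doubly stochastic" functional gradients (randomness over both data points and random features), the Python sketch below trains a regressor on fixed random Fourier features with SGD over samples. The paper's actual algorithm draws fresh random features on the fly; the feature count, bandwidth, and learning rate here are illustrative assumptions.

import numpy as np

def rff_sgd_regression(X, y, n_features=200, bandwidth=1.0, lr=0.01, epochs=5, seed=0):
    """SGD over random Fourier features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))  # random frequencies
    phase = rng.uniform(0, 2 * np.pi, size=n_features)

    def phi(x):  # random Fourier feature map
        return np.sqrt(2.0 / n_features) * np.cos(x @ W + phase)

    w = np.zeros(n_features)
    for _ in range(epochs):
        for i in rng.permutation(n):
            z = phi(X[i])
            w -= lr * (z @ w - y[i]) * z  # stochastic gradient on squared loss
    return w, phi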

Full text

Fast Regression with an ℓ∞ Guarantee

Sketching has emerged as a powerful technique for speeding up problems in numerical linear algebra, such as regression. In the overconstrained regression problem, one is given an n × d matrix A, with n ≫ d, as well as an n × 1 vector b, and one wants to find a vector x̂ so as to minimize the residual error ‖Ax − b‖₂. Using the sketch-and-solve paradigm, one first computes S · A and S · b for a rand...
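The sketch-and-solve idea described above can be illustrated with a dense Gaussian sketch, as in the Python snippet below; the sketch size heuristic is an assumption, and the paper itself analyzes more refined sketches together with an ℓ∞-norm guarantee on the solution.

import numpy as np

def sketch_and_solve(A, b, sketch_rows=None, seed=0):
    """Solve least squares on (S@A, S@b) for a random sketch S with few rows."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = sketch_rows or max(4 * d, 50)          # heuristic sketch size (assumption)
    S = rng.normal(size=(m, n)) / np.sqrt(m)   # dense Gaussian sketching matrix
    x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_hat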

Full text

Fast Newton methods for the group fused lasso

We present a new algorithmic approach to the group fused lasso, a convex model that approximates a multi-dimensional signal by an approximately piecewise-constant signal. This model has found many applications in multiple change point detection, signal compression, and total variation denoising, though existing algorithms typically rely on first-order or alternating minimization schemes. In this...
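For context, one common formulation of the group fused lasso (not necessarily the exact objective used in this paper) approximates an observed T × p signal Y by a matrix B whose rows change at few time points:

\min_{B \in \mathbb{R}^{T \times p}} \ \tfrac{1}{2}\,\| Y - B \|_F^2 \;+\; \lambda \sum_{t=1}^{T-1} \bigl\| B_{t+1,\cdot} - B_{t,\cdot} \bigr\|_2

The ℓ2 norm on row differences couples all p dimensions at each candidate change point, which is what makes the model "group" fused.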

Full text

A Non-convex Approach for Sparse Recovery with Convergence Guarantee

In the area of sparse recovery, numerous studies suggest that non-convex penalties can induce better sparsity than convex ones, but non-convex algorithms have so far lacked a convergence guarantee from the initial solution to the global optimum. This paper aims to provide a theoretical guarantee for sparse recovery via non-convex optimization. The concept of weak convexity is incorporated into...
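For reference, a standard definition of weak convexity used in this literature (the paper's precise definition may differ in constants) is:

f \ \text{is } \rho\text{-weakly convex} \iff x \mapsto f(x) + \tfrac{\rho}{2}\,\|x\|_2^2 \ \text{is convex.}

In other words, f becomes convex after adding a sufficiently large quadratic, which is what allows non-convex penalties to retain some of the tractability of convex ones.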

Full text


Journal

Journal title: Machine Learning

Year: 2016

ISSN: 0885-6125,1573-0565

DOI: 10.1007/s10994-016-5578-4